Similar resources
Use of tensor formats in elliptic eigenvalue problems
We investigate approximations by finite sums of products of functions with separated variables to eigenfunctions of a certain class of elliptic operators in higher dimensions, and in particular conditions that ensure an exponential decrease of the error with respect to the number of terms. The results of the consistent use of tensor formats can be regarded as a basis for a new class of rank-truncated i...
Fundamental Tensor Operations for Large-Scale Data Analysis in Tensor Train Formats
We discuss extended definitions of linear and multilinear operations such as Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. Then we introduce effective low-rank tensor approximation techniques including Candecomp/Parafac (CP), Tucker, and tensor train (TT) decompositions with a number of mathematical and graphical representations. We also pro...
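The operations named in this abstract are standard and easy to illustrate. As a minimal sketch (not code from the cited paper), the Kronecker, Hadamard, and contracted products can be computed with NumPy as follows; the matrices `A` and `B` are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

# Kronecker product: each entry of A scales a full copy of B,
# giving a (2*2) x (2*2) block matrix.
kron = np.kron(A, B)

# Hadamard product: elementwise multiplication, same shape as A and B.
hadamard = A * B

# Contracted product: summation over one shared index,
# here contracting the columns of A against the rows of B.
contracted = np.tensordot(A, B, axes=([1], [0]))

print(kron.shape)      # (4, 4)
print(hadamard)
print(contracted)
```

Tensor train (TT) and Tucker decompositions build on exactly these contractions, applied to higher-order arrays.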
Diffusion Tensor based Reconstruction of the Ductal Tree
INTRODUCTION: The architecture of the ductal trees was first investigated by Sir Astley Cooper in 1840, using duct-injection studies ex vivo (1). Recently, computer-derived tracking of whole-breast ductal trees has been achieved in a few human breasts using mastectomy specimens (2). Studying the architecture of the entire ductal tree is very challenging and has not yet been achieved in vivo (3,...
Tree-based Space Efficient Formats for Storing the Structure of Sparse Matrices
Sparse storage formats describe how sparse matrices are stored in computer memory. Extensive research has been conducted on these formats in the context of performance optimization of sparse matrix-vector multiplication algorithms, but memory-efficient formats for storing sparse matrices are still under development, since the commonly used storage formats (like COO or CSR) are no...
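The COO and CSR formats mentioned here can be inspected directly with SciPy. This is a minimal sketch on a toy matrix, not taken from the cited paper: COO stores one (row, col, value) triple per nonzero, while CSR compresses the row coordinates into an `indptr` array of length `nrows + 1`.

```python
import numpy as np
from scipy.sparse import coo_matrix, csr_matrix

dense = np.array([[0, 0, 3],
                  [4, 0, 0],
                  [0, 5, 6]])

coo = coo_matrix(dense)   # parallel arrays: row, col, data (one entry per nonzero)
csr = csr_matrix(dense)   # arrays: indptr, indices, data

print(coo.row, coo.col, coo.data)        # explicit coordinates of the 4 nonzeros
print(csr.indptr, csr.indices, csr.data) # indptr[i]:indptr[i+1] spans row i
```

For a matrix with `nnz` nonzeros, COO needs `3 * nnz` stored values, whereas CSR needs `2 * nnz + nrows + 1`, which is why CSR is usually the more compact of the two for matrices with many rows occupied.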
Fast Tree-Structured Recursive Neural Tensor Networks
In this project we explore different ways in which we can optimize the computation of training a Tree-structured RNTN, in particular batching techniques in combining many matrix-vector multiplications into matrix-matrix multiplications, and many tensor-vector operations into tensor-matrix operations. We assume that training is performed using mini-batch AdaGrad algorithm, and explore how we can...
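The batching idea described in this abstract can be sketched in a few lines of NumPy (shapes and names are illustrative assumptions, not from the cited project): instead of applying a weight matrix `W` to each input vector separately, the vectors are stacked into a matrix so a single matrix-matrix product replaces many matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))                     # shared weight matrix
xs = [rng.standard_normal(3) for _ in range(5)]     # 5 input vectors

# Unbatched: one matrix-vector product per input.
ys_loop = np.stack([W @ x for x in xs], axis=1)

# Batched: stack inputs column-wise, then one matrix-matrix product.
X = np.stack(xs, axis=1)   # shape (3, 5)
ys_batch = W @ X           # shape (4, 5)

assert np.allclose(ys_loop, ys_batch)
```

The single matmul exposes far more work to the underlying BLAS kernel than a loop of matvecs, which is the performance effect such batching exploits.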
Journal
Journal title: SeMA Journal
Year: 2018
ISSN: 2254-3902,2281-7875
DOI: 10.1007/s40324-018-0177-x